    CAE-COVIDX: automatic covid-19 disease detection based on x-ray images using enhanced deep convolutional and autoencoder

    Since the first case in 2019, the coronavirus has spread all over the world, and the World Health Organization (WHO) declared COVID-19 an international pandemic. An essential step in containing the spread of the virus is immediate detection in patients. Traditional medical detection requires a long time, a specialized laboratory, and a high cost, so a method that detects COVID-19 faster than common approaches such as RT-PCR is needed; X-ray-based detection has been shown to achieve high accuracy while consuming less time. We propose a novel method, dubbed CAE-COVIDX, that extracts image features and classifies COVID-19 using a deep CNN combined with an autoencoder (AE). We evaluated it against a traditional CNN and the existing VGG16 framework on 400 X-ray images of non-COVID-19 (normal) cases and 400 of COVID-19-positive cases. Performance was evaluated using accuracy, the confusion matrix, and loss. The experimental results show that the CAE-COVIDX framework outperforms the previous methods in several testing scenarios. Its ability to detect COVID-19 in various nonstandard X-ray images could effectively help medical practitioners diagnose COVID-19 patients, an important factor in substantially slowing the spread of the disease.
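    The abstract does not spell out the exact layer configuration, so the following is only a minimal PyTorch sketch of the general idea: a convolutional encoder shared between a reconstruction decoder (the autoencoder branch) and a COVID-19/normal classification head. The 128x128 input size and all channel counts are assumptions for illustration, not details from the paper.

```python
import torch
import torch.nn as nn

class CAECovidX(nn.Module):
    """Convolutional autoencoder with a classification head (sketch only)."""
    def __init__(self):
        super().__init__()
        # Encoder: compress a 1x128x128 X-ray into a 32x32x32 feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # -> 16x64x64
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # -> 32x32x32
        )
        # Decoder: reconstruct the input (the autoencoder branch).
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # -> 16x64x64
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),  # -> 1x128x128
        )
        # Classifier head: COVID-19 vs. normal from the encoded features.
        self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 32, 2))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z)

model = CAECovidX()
x = torch.rand(4, 1, 128, 128)                  # batch of 4 grayscale X-rays in [0, 1]
recon, logits = model(x)
# Joint objective: reconstruction loss regularizes the features the classifier uses.
loss = nn.functional.mse_loss(recon, x) + \
       nn.functional.cross_entropy(logits, torch.randint(0, 2, (4,)))
```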

    Neural Network Classification of Brainwave Alpha Signals in Cognitive Activities

    The signal produced by human brain waves is a unique biometric feature. Brain waves carry information and are represented as electrical signals generated by the brain in a characteristic waveform. Brain wave activity remains active even during sleep, and brain waves exhibit different characteristics in different individuals; physical and behavioral characteristics can be identified from patterns of brain wave activity. This study aims to distinguish individuals based on the characteristics of the alpha signals their brain waves produce. Brain wave signals were elicited by giving several mental perception tasks and measured using an electroencephalogram (EEG). To obtain discriminative features, the EEG signals were processed with first-order feature extraction and classified using a neural network. Five first-order features were used: average, standard deviation, skewness, kurtosis, and entropy. Pattern recognition training converged in 171 iterations with an execution time of 6 seconds. Performance was measured with the mean squared error (MSE) function, and the MSE obtained in the pattern test was 0.000994.
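    As a rough illustration of the feature step, here is a sketch of the five first-order features computed per EEG epoch. The histogram-based Shannon entropy, the 128 Hz sampling rate, and the simulated 10 Hz alpha-band signal are assumptions for the example, not details from the paper.

```python
import numpy as np
from scipy.stats import skew, kurtosis, entropy

def first_order_features(signal, bins=32):
    """Five first-order features of a 1-D EEG (alpha band) epoch."""
    hist, _ = np.histogram(signal, bins=bins, density=True)
    hist = hist[hist > 0]                      # drop empty bins before the entropy
    return np.array([
        np.mean(signal),                       # average
        np.std(signal),                        # standard deviation
        skew(signal),                          # skewness
        kurtosis(signal),                      # kurtosis
        entropy(hist),                         # Shannon entropy of the amplitude histogram
    ])

# Example: one simulated 2-second epoch of a 10 Hz alpha rhythm at 128 Hz.
rng = np.random.default_rng(0)
t = np.linspace(0, 2, 256)
epoch = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(256)
print(first_order_features(epoch))             # 5-element feature vector per epoch
```

    Feature vectors like this one, computed per subject and per task, would then be fed to the neural network classifier.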

    Optimized Three Deep Learning Models Based-PSO Hyperparameters for Beijing PM2.5 Prediction

    Deep learning is a machine learning approach that produces excellent performance in various applications, including natural language processing, image identification, and forecasting. Deep learning network performance depends on the hyperparameter settings. This research optimizes the deep learning architectures of long short-term memory (LSTM), convolutional neural network (CNN), and multilayer perceptron (MLP) for forecasting tasks using particle swarm optimization (PSO), a swarm-intelligence-based metaheuristic, yielding the proposed models M-1 (PSO-LSTM), M-2 (PSO-CNN), and M-3 (PSO-MLP). The Beijing PM2.5 dataset was analyzed to measure the performance of the proposed models. The target variable, PM2.5, is affected by dew point, pressure, temperature, cumulated wind speed, hours of snow, and hours of rain. The network inputs cover three scenarios: daily, weekly, and monthly. The results show that the proposed M-1 with three hidden layers produces the best RMSE and MAPE compared with M-2, M-3, and all the baselines. These optimized models could be used to generate recommendations for air pollution management.
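    The abstract does not list which hyperparameters were searched, so the sketch below shows generic PSO minimizing a stand-in objective. In the paper the objective would be the validation error of an LSTM/CNN/MLP trained with the candidate hyperparameters; the `val_rmse` surface here is purely hypothetical.

```python
import numpy as np

def pso(objective, bounds, n_particles=10, iters=20, w=0.7, c1=1.5, c2=1.5):
    """Minimize `objective` over the box `bounds` with basic particle swarm optimization."""
    rng = np.random.default_rng(42)
    lo, hi = np.array(bounds).T
    pos = rng.uniform(lo, hi, (n_particles, len(bounds)))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # Pull each particle toward its own best and the swarm's best position.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = np.array([objective(p) for p in pos])
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[pbest_val.argmin()]
    return gbest, pbest_val.min()

# Hypothetical stand-in: pretend validation RMSE as a function of
# (hidden units, learning rate); a real run would train the network here.
def val_rmse(hp):
    units, lr = int(hp[0]), hp[1]
    return abs(units - 64) / 64 + abs(np.log10(lr) + 3)

best, best_rmse = pso(val_rmse, bounds=[(8, 128), (1e-4, 1e-1)])
print("best units/lr:", int(best[0]), best[1])
```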

    An Adaptive Trajectory Clustering Method Based on Grid and Density in Mobile Pattern Analysis

    Clustering analysis is one of the most important issues in trajectory data mining. Trajectory clustering can be widely applied to hotspot detection, mobile pattern analysis, urban transportation control, and hurricane prediction. To obtain good clustering performance, existing trajectory clustering approaches require one or more input parameters whose optimal values must be calibrated, which results in a heavy workload and high computational complexity. To realize adaptive parameter calibration and reduce the workload of trajectory clustering, an adaptive trajectory clustering approach based on grid and density (ATCGD) is proposed in this paper. The proposed ATCGD approach consists of three parts: partition, mapping, and clustering. In the partition phase, ATCGD applies the average-angular-difference-based MDL (AD-MDL) partition method to ensure partition accuracy while decreasing the number of segments after partition. During the mapping procedure, the partitioned segments are mapped into the corresponding cells, and the mapping relationship between the segments and the cells is stored. In the clustering phase, a DBSCAN-based method clusters the segments in the cells using parameter values calibrated from the mapping procedure. Extensive experiments indicate that although the results of the adaptive parameter calibration are not optimal, in most cases the difference between the adaptive calibration and the optimum is less than 5%, while the run time of clustering is reduced by about 95% compared with the TRACLUS algorithm.
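    The AD-MDL partitioning and the adaptive calibration itself are beyond an abstract-level sketch, but the mapping and clustering phases can be illustrated roughly as below. Representing segments by their midpoints, the cell size, and the fixed DBSCAN parameters are all simplifying assumptions; in the paper the DBSCAN parameters are calibrated from the grid statistics.

```python
import numpy as np
from collections import defaultdict
from sklearn.cluster import DBSCAN

# Toy partitioned segments: each row is (x1, y1, x2, y2).
segments = np.array([[0, 0, 1, 1], [0.2, 0, 1.2, 1], [5, 5, 6, 6], [5.1, 5, 6.1, 6]])
mid = (segments[:, :2] + segments[:, 2:]) / 2   # midpoint stands in for the segment

# Mapping phase: assign each segment to a grid cell and store the relation.
cell_size = 1.0
cells = defaultdict(list)
for i, m in enumerate(mid):
    cells[tuple((m // cell_size).astype(int))].append(i)

# Clustering phase: density-based clustering of the mapped segments.
# eps/min_samples are fixed here for illustration only.
labels = DBSCAN(eps=1.0, min_samples=1).fit_predict(mid)
print(dict(cells))   # cell -> segment indices
print(labels)        # cluster label per segment
```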

    A Segment-Based Trajectory Similarity Measure in the Urban Transportation Systems

    With the rapid spread of handheld smart devices with built-in GPS, trajectory data from GPS sensors has grown explosively. Trajectory data has spatio-temporal characteristics and rich information, and trajectory data processing techniques can mine the patterns of human activities and the movement patterns of vehicles in intelligent transportation systems. A trajectory similarity measure is one of the most important issues in trajectory data mining (clustering, classification, frequent pattern mining, etc.). Unfortunately, the main similarity measure algorithms for trajectory data have been found to be inaccurate, highly sensitive to sampling methods, and poorly robust to noisy data. To solve these problems, three distances and their corresponding computation methods are proposed in this paper. The point-segment distance decreases sensitivity to the point sampling method. The prediction distance optimizes the temporal distance using the features of trajectory data. The segment-segment distance introduces the trajectory shape factor into the similarity measurement to improve accuracy. The three distances are integrated with the traditional dynamic time warping (DTW) algorithm to propose a new segment-based dynamic time warping algorithm (SDTW). The experimental results show that SDTW achieves about 57%, 86%, and 31% better accuracy than the longest common subsequence algorithm (LCSS), the edit distance on real sequence algorithm (EDR), and DTW, respectively, and that its sensitivity to noisy data is lower than that of those algorithms.
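    As a rough sketch of the idea, the snippet below plugs a toy segment-segment distance (a midpoint gap plus a heading term standing in for the shape factor) into the classic DTW recurrence; the paper's three distances are more elaborate than this.

```python
import numpy as np

def dtw(a, b, dist):
    """Classic dynamic-programming DTW with a pluggable element distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = dist(a[i - 1], b[j - 1]) + min(
                D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def segment_distance(s, t):
    """Toy segment-segment distance: midpoint gap plus a heading (shape) term."""
    s, t = np.asarray(s, float), np.asarray(t, float)
    mid = np.linalg.norm((s[:2] + s[2:]) / 2 - (t[:2] + t[2:]) / 2)
    ds, dt = s[2:] - s[:2], t[2:] - t[:2]
    ang = abs(np.arctan2(ds[1], ds[0]) - np.arctan2(dt[1], dt[0]))
    return mid + min(ang, 2 * np.pi - ang)      # wrap the angle difference

# Trajectories as sequences of segments (x1, y1, x2, y2).
ta = [(0, 0, 1, 0), (1, 0, 2, 1)]
tb = [(0, 0.2, 1, 0.2), (1, 0.2, 2, 1.2)]
print(dtw(ta, tb, segment_distance))
```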

    Chinese Event Detection without Triggers Based on Dual Attention

    In natural language processing, event detection is a critical step in event extraction, aiming to detect the occurrence of events and categorize them. Currently, trigger-based Chinese event detection suffers from polysemous triggers and trigger-word mismatches, which reduce the accuracy of event detection models. Therefore, event detection without triggers based on dual attention (EDWTDA), a trigger-free model that can skip the trigger identification process and determine event types directly, is proposed to fix these problems. EDWTDA adopts a dual attention mechanism integrating local and global attention. Local attention captures key semantic information in sentences and simulates hidden event trigger words to solve the trigger-word mismatch problem, while global attention mines document-level context to address polysemous triggers. In addition, event detection is transformed into a binary classification task to avoid the problems caused by multiple tags, and the resulting sample imbalance is handled with the focal loss function. The experimental results on the ACE 2005 Chinese corpus show that, compared with the best baseline model, JMCEE, the proposed model improves accuracy, recall, and F1-score by 3.40%, 3.90%, and 3.67%, respectively.
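    The focal loss used to handle the class imbalance after the binary reformulation can be sketched as follows; the gamma and alpha values are the common defaults, not necessarily those used in the paper.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Binary focal loss: down-weights easy examples so the rare positive
    class (an event of the queried type is present) dominates the gradient."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                       # model's probability of the true class
    a_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (a_t * (1 - p_t) ** gamma * bce).mean()

logits = torch.tensor([2.0, -1.0, 0.5])         # one score per (sentence, event type) pair
targets = torch.tensor([1.0, 0.0, 1.0])         # 1 = an event of this type occurs
print(focal_loss(logits, targets))
```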

    Parallel Approach of Adaptive Image Thresholding Algorithm on GPU

    Image thresholding segments an image into background and foreground using a given threshold. The threshold can be generated by a specific algorithm instead of a pre-defined value obtained from observation or experiment. However, such algorithms involve per-pixel operations, histogram calculation, and an iterative search for the optimum threshold, which is costly for high-resolution images. In this research, parallel GPU implementations of three adaptive image thresholding methods, namely Otsu, ISODATA, and minimum cross-entropy, are proposed to optimize their computational times for high-resolution images. The approach uses parallel reduction and parallel prefix sum (scan) techniques to optimize the calculation. The proposed approach was tested on grayscale images of various sizes. The results show that the parallel GPU implementations of the three adaptive thresholding methods achieve a 4-6x speedup over the CPU implementations, reducing the computational time significantly and effectively handling high-resolution images.
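    The role of the prefix sum can be seen in a CPU sketch of Otsu's method: the two `cumsum` calls below are exactly the steps that become parallel scans on the GPU (the histogram itself would be built with atomic adds). This NumPy version is for illustration only, not the paper's GPU kernel.

```python
import numpy as np

def otsu_threshold(image):
    """Otsu's threshold via cumulative (prefix) sums over the 256-bin histogram."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                        # class-0 probability up to t (a scan)
    mu = np.cumsum(p * np.arange(256))          # class-0 cumulative mean (a scan)
    mu_t = mu[-1]                               # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        # Between-class variance for every candidate threshold t at once.
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b2))          # t maximizing between-class variance

img = np.clip(np.random.default_rng(0).normal(120, 40, (512, 512)), 0, 255)
t = otsu_threshold(img)
binary = img > t                                # foreground/background mask
print("threshold:", t)
```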

    Time-Series Deep Learning Models for Reservoir Scheduling Problems Based on LSTM and Wavelet Transformation

    In the summer of 2022, historically exceptional high temperatures were observed in several parts of China, particularly in the province of Sichuan, and residential demand for energy increased. Up to 70% of Sichuan's electricity comes from hydropower, so creating a sensible and practical reservoir scheduling plan is essential to maximizing reservoir power generation efficiency. However, classical approaches, such as the back-propagation (BP) neural network, do not take into account the temporal correlation of samples when generating reservoir scheduling rules. We propose a prediction model based on an LSTM neural network coupled with the wavelet transform (WT-LSTM) to address this problem. To extract the reservoir scheduling rules, this paper first gathers scheduling operation data from the Xiluodu hydropower station and creates a dataset. It then exploits the time-series prediction model's capacity to realize complex nonlinear mappings, learn temporal dependencies, and deliver high prediction accuracy. Comparison of evaluation indexes such as root mean square error (RMSE), rank-sum ratio (RSR), and Nash-Sutcliffe efficiency (NSE) demonstrates that the time-series deep learning network has a high learning capability for reservoir scheduling. The WT-LSTM network model put forward in this research offers better prediction accuracy than conventional recurrent neural networks and, by learning previous scheduling data to produce outflow solutions, serves as a reference for scheduling decisions, which has both theoretical and practical benefits.
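    The abstract does not specify how the wavelet transform is coupled to the LSTM, so the sketch below assumes a common WT-LSTM arrangement: a wavelet decomposition (PyWavelets) denoises the series, and an LSTM is trained on sliding windows of the reconstructed approximation. The synthetic series, window length, and layer sizes are all illustrative; the paper uses Xiluodu scheduling data.

```python
import numpy as np
import pywt
import torch
import torch.nn as nn

# Stand-in for a daily scheduling series (e.g., reservoir inflow).
series = np.sin(np.linspace(0, 20, 400)) + 0.2 * np.random.default_rng(1).standard_normal(400)

# Wavelet step: decompose, drop detail coefficients, reconstruct a smooth approximation.
coeffs = pywt.wavedec(series, "db4", level=3)
coeffs[1:] = [np.zeros_like(c) for c in coeffs[1:]]
smooth = pywt.waverec(coeffs, "db4")[: len(series)]

# Sliding windows: previous 30 steps -> next step.
win = 30
X = np.stack([smooth[i : i + win] for i in range(len(smooth) - win)])
y = smooth[win:]
X = torch.tensor(X, dtype=torch.float32).unsqueeze(-1)   # (N, win, 1)
y = torch.tensor(y, dtype=torch.float32).unsqueeze(-1)   # (N, 1)

class WTLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])                      # predict the next value

model = WTLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                                        # a few illustrative epochs
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print("train MSE:", float(loss))
```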